Humans make extensive use of vision and touch as complementary senses: vision provides global information about the scene, while touch provides local information during manipulation without suffering from occlusion. In this work, we propose a novel framework for learning multi-task visuo-tactile representations in a self-supervised manner. We design a mechanism that enables a robot to autonomously collect spatially aligned visual and tactile data, a key property for downstream tasks. We then train visual and tactile encoders with a cross-modal contrastive loss to embed these paired sensory inputs into a shared latent space. The learned representations are evaluated, without fine-tuning, on 5 perception and control tasks involving deformable surfaces: tactile classification, contact localization, anomaly detection (e.g., palpating for a tumor in a surgical phantom), tactile search from a visual query (e.g., under occlusion), and tactile servoing along cloth edges and cables. The learned representations achieve an 80% success rate on towel feature classification, a 73% average success rate on anomaly detection in surgical materials, strong results on visually guided tactile search, and an 87.8% average servoing distance along cables and garment seams. These results suggest the flexibility of the learned representations and take a step toward task-agnostic visuo-tactile representations for robot control.
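The cross-modal contrastive objective described above can be sketched as a symmetric InfoNCE loss over paired embeddings; the function below is a minimal, illustrative version (not the paper's code), with paired rows of the two embedding matrices serving as positives:

```python
import numpy as np

def cross_modal_infonce(z_vis, z_tac, temperature=0.1):
    """Symmetric InfoNCE loss over paired visual/tactile embeddings.

    z_vis, z_tac: (N, D) arrays; row i of each is one spatially aligned pair.
    """
    # L2-normalize embeddings so the similarity matrix holds cosine similarities.
    z_vis = z_vis / np.linalg.norm(z_vis, axis=1, keepdims=True)
    z_tac = z_tac / np.linalg.norm(z_tac, axis=1, keepdims=True)
    logits = z_vis @ z_tac.T / temperature   # (N, N); positives on the diagonal
    labels = np.arange(len(z_vis))

    def xent(l):
        # Cross-entropy with the diagonal as the correct class, computed stably.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average both retrieval directions: vision->touch and touch->vision.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Correctly paired batches yield a near-zero loss, while mismatched pairings are penalized, which is what pulls the two modalities into a shared latent space.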
Stacking objects increases storage efficiency on shelves, but the lack of visibility and accessibility makes the mechanical search problem of revealing and extracting target objects difficult for robots. In this paper, we extend the lateral-access mechanical search problem to shelves with stacked items and introduce two novel policies, Distribution Area Reduction for Stacked Scenes (DARSS) and Monte Carlo Tree Search for Stacked Scenes (MCTSSS), that use destacking and restacking actions. MCTSSS improves on prior lookahead policies by considering future states after each potential action. Experiments in 1200 simulated trials and 18 physical trials with a robot equipped with a blade and suction cup suggest that destacking and restacking actions can reveal the target object with 82-100% success in simulation and 66-100% success in physical experiments, and are critical for searching densely packed shelves. In simulation experiments, both policies outperform a baseline and achieve similar success rates, but take more steps compared to an oracle policy that has full state information. In both simulated and physical experiments, DARSS outperforms MCTSSS in the median number of steps to reveal the target, but MCTSSS has a higher success rate in physical experiments, suggesting robustness to perception noise. See https://sites.google.com/berkeley.edu/stax-ray for supplementary material.
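The lookahead idea behind MCTSSS, scoring each candidate action by the best future state it can lead to, can be sketched with a tiny exhaustive tree search; the real policy samples Monte Carlo rollouts rather than enumerating them, and all names here are illustrative:

```python
def lookahead_value(state, depth, actions, transition, reward):
    """Value of a state: best reward reachable within `depth` more actions."""
    if depth == 0:
        return reward(state)
    best = reward(state)
    for a in actions(state):
        nxt = transition(state, a)
        best = max(best, lookahead_value(nxt, depth - 1, actions, transition, reward))
    return best

def choose_action(state, depth, actions, transition, reward):
    """Pick the action whose successor state has the highest lookahead value."""
    return max(actions(state),
               key=lambda a: lookahead_value(transition(state, a), depth - 1,
                                             actions, transition, reward))
```

In a toy domain where "destack" removes one occluder and the reward is 1 only when the target is revealed, the lookahead policy correctly prefers destacking even though no single action yields immediate reward.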
We carry out a comprehensive evaluation of 13 recent models for ranking long documents using two popular collections (MS MARCO Documents and Robust04). Our model zoo includes two specialized Transformer models (such as Longformer) that can process long documents without splitting them. Along the way, we document several difficulties in training and comparing such models. Somewhat surprisingly, we find that the simple FirstP baseline (truncating documents to satisfy the input-sequence constraint of a typical Transformer model) is quite effective. We analyze the distribution of relevant passages (within documents) to explain this phenomenon. We further argue that, despite their widespread use, Robust04 and MS MARCO Documents are not particularly useful for benchmarking long-document models.
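The FirstP baseline, and the chunk-and-aggregate alternative it is compared against, can be sketched in a few lines; `score_fn` stands in for any bounded-input relevance model (e.g. a BERT-style cross-encoder), and these names are illustrative rather than a real library API:

```python
def chunk(tokens, size):
    """Split a token list into consecutive windows of at most `size` tokens."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def firstp(query, doc_tokens, score_fn, limit=512):
    # FirstP: truncate -- keep only the tokens that fit the model's input window.
    return score_fn(query, doc_tokens[:limit])

def maxp(query, doc_tokens, score_fn, limit=512):
    # MaxP-style aggregation: score each window, keep the best.
    return max(score_fn(query, c) for c in chunk(doc_tokens, limit))
```

When relevant passages concentrate near the start of documents, as the distribution analysis above suggests, FirstP and the aggregated score coincide, which helps explain why simple truncation is so competitive.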
Recent work has shown that 2-arm "fling" motions can be effective for garment smoothing. We consider single-arm fling motions. Unlike 2-arm fling motions, which require little tuning of robot trajectory parameters, single-arm fling motions are sensitive to trajectory parameters. We consider a single 6-DOF robot arm that learns fling trajectories to achieve high garment coverage. Given a garment grasp point, the robot explores different parameterized fling trajectories in physical experiments. To improve learning efficiency, we propose a coarse-to-fine learning method that first uses a multi-armed bandit (MAB) framework to efficiently find a candidate action, which is then refined via a continuous optimization method. Furthermore, we propose novel training-time and execution-time stopping criteria based on the uncertainty of fling outcomes. We show that the proposed method significantly accelerates learning compared to baselines. Moreover, with prior experience on similar garments collected through self-supervision, the MAB learning time for a new garment is reduced by up to 87%. We evaluate on 6 garment types: towels, T-shirts, long-sleeved shirts, dresses, sweatshirts, and jeans. Results suggest that, using prior experience, a robot requires under 30 minutes to learn a fling action for a novel garment that achieves 60-94% coverage.
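The coarse stage of the method above treats each discretized fling trajectory as a bandit arm; a minimal sketch using the standard UCB1 rule follows (the paper's exact MAB variant and parameter names are not specified here, so this is illustrative only):

```python
import math

def ucb1(n_pulls, total_reward, t, c=2.0):
    """UCB1 index: empirical mean plus an exploration bonus."""
    if n_pulls == 0:
        return float('inf')   # force each arm to be tried once
    return total_reward / n_pulls + math.sqrt(c * math.log(t) / n_pulls)

def mab_search(arms, try_action, budget):
    """Coarse stage: pick a discretized fling trajectory (arm) by UCB1.

    try_action(arm) returns the observed coverage in [0, 1]; a continuous
    optimizer would then refine the winning arm's trajectory parameters.
    """
    pulls = [0] * len(arms)
    totals = [0.0] * len(arms)
    for t in range(1, budget + 1):
        i = max(range(len(arms)), key=lambda i: ucb1(pulls[i], totals[i], t))
        totals[i] += try_action(arms[i])
        pulls[i] += 1
    # Return the arm with the best empirical mean coverage.
    return arms[max(range(len(arms)), key=lambda i: totals[i] / max(pulls[i], 1))]
```

With deterministic toy rewards the bandit concentrates its trials on the highest-coverage trajectory, which is exactly the candidate the fine-grained continuous optimization would then start from.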
Shelves are commonly used to store objects in homes, stores, and warehouses. We formulate the problem of Optimal Shelf Arrangement (OSA), where the goal is to optimize the arrangement of objects on a shelf for access time, given an access frequency and a movement cost for each object. We propose OSA-MIP, a mixed-integer program (MIP), show that it finds an optimal solution for OSA under certain conditions, and provide bounds on its suboptimal solutions in general cost settings. We analytically characterize a necessary and sufficient shelf-density condition under which any object can be retrieved without removing other objects from the shelf. Experimental data from 1,575 simulated shelf trials and 54 trials with a physical Fetch robot equipped with a pushing blade and suction grasping tool suggest that optimally arranging objects reduces the expected retrieval cost by 60-80%, reduces the expected search cost by 50-70% in partially observed configurations, and improves the search success rate by up to 2x.
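The OSA objective, frequency-weighted access cost over an object-to-slot assignment, can be illustrated with a brute-force stand-in for the MIP; the paper's OSA-MIP additionally encodes reachability constraints, which are omitted in this sketch:

```python
from itertools import permutations

def expected_cost(assignment, freq, slot_cost):
    """Expected retrieval cost: each object's access frequency times the
    movement cost of the slot it is assigned to."""
    return sum(freq[obj] * slot_cost[slot] for obj, slot in enumerate(assignment))

def best_arrangement(freq, slot_cost):
    """Exhaustively try every object->slot assignment (tiny instances only).

    A MIP solver handles this at scale; enumeration just demonstrates the
    objective being minimized.
    """
    return min(permutations(range(len(slot_cost)), len(freq)),
               key=lambda a: expected_cost(a, freq, slot_cost))
```

As expected, the optimum places frequently accessed objects in the cheapest-to-reach slots, which is the intuition behind the 60-80% retrieval-cost reduction reported above.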
Sketching is a natural and effective visual communication medium commonly used in creative processes. Recent developments in deep-learning models have dramatically improved machines' ability to understand and generate visual content. An exciting area of development explores deep-learning approaches to modeling human sketches, opening opportunities for creative applications. This chapter describes three fundamental steps in developing deep-learning-driven creativity-support tools that consume and generate sketches: 1) a data-collection effort that produced a new paired dataset of sketches and mobile user interfaces; 2) a sketch-based user-interface retrieval system adapted from state-of-the-art computer-vision techniques; and 3) a conversational sketching system supporting a novel interaction: a natural-language-based sketch/critique authoring process. In this chapter, we survey related prior work in both the deep-learning and human-computer-interaction communities, document the data-collection process and the systems' architectures in detail, present qualitative and quantitative results, and paint the landscape of several future research directions in this exciting area.
Manipulating deformable objects with a single parameterized dynamic action can be useful for tasks such as fly fishing, lofting a blanket, and playing shuffleboard. Such tasks take as input a desired final state and output one parameterized open-loop dynamic robot action that produces a trajectory toward that final state. This is especially challenging for long-horizon trajectories with complex dynamics involving friction. This paper explores the task of Planar Robot Casting (PRC), in which a single planar motion of a robot wrist holding one end of a cable causes the other end to slide across the plane toward a desired target. PRC allows the cable to reach points beyond the robot workspace and has applications for cable management in homes, warehouses, and factories. To efficiently learn a PRC policy for a given cable, we propose Real2Sim2Real, a self-supervised framework that automatically collects physical trajectory examples, uses differential evolution to tune the parameters of a dynamics simulator, generates many simulated examples, and then learns a policy using a weighted combination of simulated and physical data. We evaluate Real2Sim2Real with three simulators (Isaac Gym-segmented, Isaac Gym-hybrid, and PyBullet), two function approximators (Gaussian Processes and Neural Networks (NNs)), and three cables with differing stiffness, torsion, and friction. Results on 16 held-out test targets for each cable suggest that the NN PRC policies using Isaac Gym-segmented attain a median error distance (as a percentage of cable length) ranging from 8% to 14%, outperforming baselines and policies trained on only real or only simulated examples. Code, data, and videos are available at https://tinyurl.com/robotcast.
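The simulator-tuning step above uses differential evolution; a minimal DE/rand/1/bin loop is sketched below, where `loss(params)` would compare simulated to physical trajectories (here it is any scalar function, and all names are illustrative):

```python
import random

def differential_evolution(loss, bounds, pop_size=20, iters=100, f=0.8, cr=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer for tuning simulator parameters.

    bounds: list of (lo, hi) per parameter. Returns the best member found.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [loss(p) for p in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutate: base vector plus scaled difference of two others.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [x + f * (y - z) if rng.random() < cr else cur
                     for cur, x, y, z in zip(pop[i], a, b, c)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: keep the trial only if it improves.
            s = loss(trial)
            if s < scores[i]:
                pop[i], scores[i] = trial, s
    return pop[min(range(pop_size), key=lambda i: scores[i])]
```

Because DE needs only black-box loss evaluations, it suits simulators whose dynamics are not differentiable with respect to physical parameters such as friction.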
The task of reconstructing 3D human motion has wide-ranging applications. The gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not in the existing MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field. It is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
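The core idea of sharing information across video instances can be illustrated by fitting one 3D motion to several videos' 2D keypoints by gradient descent; this is a drastically simplified sketch under an orthographic-camera assumption, omitting the priors and per-video alignment a real NeMo field uses:

```python
import numpy as np

def fit_shared_motion(obs_2d, steps=500, lr=0.1):
    """Fit one shared 3D motion to several videos of the same action.

    obs_2d: (V, T, J, 2) array -- V videos, T frames, J joints, 2D keypoints.
    Minimizes the mean squared reprojection error across all videos.
    """
    V, T, J, _ = obs_2d.shape
    motion = np.zeros((T, J, 3))                      # shared 3D motion, optimized
    for _ in range(steps):
        proj = motion[..., :2]                        # orthographic projection
        # Gradient of mean-over-videos squared reprojection error.
        grad = (proj[None] - obs_2d).sum(0) * 2.0 / V
        motion[..., :2] -= lr * grad
    return motion
```

Because the per-video errors are averaged, video-specific noise that cancels across instances is removed, while the motion common to all instances is retained.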
We propose AnyTOD, an end-to-end task-oriented dialog (TOD) system with zero-shot capability for unseen tasks. We view TOD as a program executed by a language model (LM), where program logic and ontology are provided by a designer in the form of a schema. To enable generalization onto unseen schemas and programs without prior training, AnyTOD adopts a neuro-symbolic approach. A neural LM keeps track of events that occur during a conversation, and a symbolic program implementing the dialog policy is executed to recommend the next actions AnyTOD should take. This approach drastically reduces data annotation and model training requirements, addressing a long-standing challenge in TOD research: rapidly adapting a TOD system to unseen tasks and domains. We demonstrate state-of-the-art results on the STAR and ABCD benchmarks, as well as AnyTOD's strong zero-shot transfer capability in low-resource settings. In addition, we release STARv2, an updated version of the STAR dataset with richer data annotations, for benchmarking zero-shot end-to-end TOD models.
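The symbolic half of the neuro-symbolic loop, a designer-written policy executed over the events the LM has tracked, can be sketched as a rule table; this is a toy stand-in, not the AnyTOD schema language:

```python
def dialog_policy(schema, events):
    """One symbolic policy step.

    schema: ordered list of (preconditions, action) rules written by a designer.
    events: events extracted so far by the neural LM, including actions taken.
    Returns the next recommended system action.
    """
    satisfied = set(events)
    # Fire the first rule whose preconditions are met and whose action
    # has not already been taken.
    for preconditions, action in schema:
        if set(preconditions) <= satisfied and action not in satisfied:
            return action
    return "END_DIALOG"
```

Because the policy is plain data plus interpretation, swapping in a new schema changes the system's behavior with no retraining, which is the source of the zero-shot transfer described above.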
A long-standing goal of machine-learning-based protein engineering is to accelerate the discovery of novel mutations that improve the function of a known protein. We introduce a sampling framework for evolving proteins in silico that supports mixing and matching a variety of unsupervised models, such as protein language models, and supervised models that predict protein function from sequence. By composing these models, we aim to improve our ability to evaluate unseen mutations and constrain search to regions of sequence space likely to contain functional proteins. Our framework achieves this without any model fine-tuning or re-training by constructing a product-of-experts distribution directly in discrete protein space. Instead of resorting to brute-force search or random sampling, which is typical of classic directed evolution, we introduce a fast MCMC sampler that uses gradients to propose promising mutations. We conduct in silico directed evolution experiments on wide fitness landscapes and across a range of different pre-trained unsupervised models, including a 650M-parameter protein language model. Our results demonstrate an ability to efficiently discover variants with high evolutionary likelihood as well as estimated activity multiple mutations away from a wild-type protein, suggesting our sampler provides a practical and effective new paradigm for machine-learning-based protein engineering.
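The product-of-experts target and the MCMC loop can be sketched as follows. The paper's sampler uses gradient-informed proposals for speed; this sketch keeps the same target distribution but proposes uniformly random point mutations, and the toy expert below is illustrative only:

```python
import math
import random

AAS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def product_of_experts(seq, experts, weights):
    """Unnormalized log-density: weighted sum of expert log-scores.

    Each expert maps a sequence to a scalar (e.g. an LM log-likelihood or a
    predicted fitness); multiplying densities = summing log-scores.
    """
    return sum(w * e(seq) for e, w in zip(experts, weights))

def metropolis_evolve(seq, experts, weights, steps=2000, seed=0):
    """Metropolis sampler over discrete protein space, one mutation at a time."""
    rng = random.Random(seed)
    cur = list(seq)
    cur_lp = product_of_experts(cur, experts, weights)
    for _ in range(steps):
        prop = cur[:]
        prop[rng.randrange(len(prop))] = rng.choice(AAS)   # single-site proposal
        lp = product_of_experts(prop, experts, weights)
        # Accept with probability min(1, exp(lp - cur_lp)); u in (0, 1].
        u = 1.0 - rng.random()
        if math.log(u) < lp - cur_lp:
            cur, cur_lp = prop, lp
    return "".join(cur)
```

Because the target is a product, a variant must score well under every expert simultaneously, which is how the framework balances evolutionary plausibility against predicted activity.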